A Contribution to the Schlesinger's Algorithm Separating Mixtures of Gaussians

Authors

  • Vojtech Franc
  • Václav Hlaváč
Abstract


Similar Articles

An EM algorithm for convolutive independent component analysis

In this paper, we address the problem of blind separation of convolutive mixtures of spatially and temporally independent sources modeled with mixtures of Gaussians. We present an EM algorithm to compute Maximum Likelihood estimates of both the separating filters and the source density parameters, whereas in the state of the art, separating filters are usually estimated with gradient descent tec...


Learning Mixtures of Gaussians

Mixtures of Gaussians are among the most fundamental and widely used statistical models. Current techniques for learning such mixtures from data are local search heuristics with weak performance guarantees. We present the first provably correct algorithm for learning a mixture of Gaussians. This algorithm is very simple and returns the true centers of the Gaussians to within the precision speci...


EM for Spherical Gaussians

In this project, we examine two aspects of the behavior of the EM algorithm for mixtures of spherical Gaussians: (1) the benefit of spectral projection for such mixtures, and (2) the general behavior of the EM algorithm under certain separability criteria. Our current results are for mixtures of two Gaussians, although these can be extended. In the case of (1), we show that the value of the Q func...
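For reference, the EM iteration these abstracts describe can be sketched for the simplest case they study, a one-dimensional mixture of two Gaussians. This is a hedged, minimal illustration, not the code of any of the cited works; the deterministic initialization at the data extremes is an assumption made here for reproducibility.

```python
import numpy as np

def em_two_gaussians(x, n_iter=100):
    """Minimal EM sketch for a 1-D mixture of two Gaussians.

    Illustrative only: initializes the means at the data extremes
    (a crude but deterministic choice) and runs a fixed number of
    E/M iterations.
    """
    mu = np.array([x.min(), x.max()])        # component means
    sigma = np.array([x.std(), x.std()])     # component std deviations
    w = np.array([0.5, 0.5])                 # mixing weights
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = np.stack([
            w[k] / (sigma[k] * np.sqrt(2 * np.pi))
            * np.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2))
            for k in range(2)
        ])
        r = dens / dens.sum(axis=0)
        # M-step: re-estimate weights, means, and std deviations
        nk = r.sum(axis=1)
        w = nk / len(x)
        mu = (r * x).sum(axis=1) / nk
        sigma = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
    return w, mu, sigma
```

On well-separated data (the "separability criteria" the abstract refers to), this recovers the component means accurately; with overlapping components, EM's behavior depends strongly on initialization, which is the phenomenon the cited project analyzes.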


PAC Learning Mixtures of Gaussians with No Separation Assumption

We propose and analyze a new vantage point for the learning of mixtures of Gaussians: namely, the PAC-style model of learning probability distributions introduced by Kearns et al. [12]. Here the task is to construct a hypothesis mixture of Gaussians that is statistically indistinguishable from the actual mixture generating the data; specifically, the KL divergence should be at most ε. In this s...


PAC Learning Mixtures of Axis-Aligned Gaussians with No Separation Assumption

We propose and analyze a new vantage point for the learning of mixtures of Gaussians: namely, the PAC-style model of learning probability distributions introduced by Kearns et al. [13]. Here the task is to construct a hypothesis mixture of Gaussians that is statistically indistinguishable from the actual mixture generating the data; specifically, the KL divergence should be at most ε. In this s...
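The success criterion in these PAC-style abstracts, KL(P‖Q) ≤ ε between the true mixture P and the hypothesis mixture Q, has no closed form for Gaussian mixtures, but it can be estimated by Monte Carlo since KL(P‖Q) = E_P[log p(X)/q(X)]. The sketch below assumes 1-D mixtures and illustrative helper names; it is not part of the cited papers.

```python
import numpy as np

def gmm_pdf(x, w, mu, sigma):
    """Density of a 1-D Gaussian mixture (illustrative helper)."""
    comps = [wk / (sk * np.sqrt(2 * np.pi))
             * np.exp(-(x - mk) ** 2 / (2 * sk ** 2))
             for wk, mk, sk in zip(w, mu, sigma)]
    return np.sum(comps, axis=0)

def mc_kl(sample_p, pdf_p, pdf_q, n=200_000, seed=0):
    """Monte Carlo estimate of KL(P || Q) = E_P[log p(X)/q(X)]."""
    xs = sample_p(np.random.default_rng(seed), n)
    return float(np.mean(np.log(pdf_p(xs) / pdf_q(xs))))
```

A hypothesis mixture whose parameters are close to the true ones yields a small positive estimate; an identical hypothesis yields exactly zero, which matches the PAC goal of driving the divergence below ε.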




Journal:

Volume   Issue

Pages  -

Publication date: 2001